\[Y_{i} = \beta_0 + \beta_1 cses_i + \varepsilon_{i}\]
       Mean     SD
b0    12.762  0.08003
b1     2.192  0.12087
sigma  6.725  0.05648
\[Y_{ij} = \beta_0 + \beta_1 cses_{ij} + \nu_{0i} + \varepsilon_{ij}\]
               Mean        SD
b0[1]       9.955816  0.8507069
b0[2]      13.372271  1.0925979
b0[3]       8.076121  0.8420225
b0[4]      15.557423  1.2288069
b0[5]      13.107298  0.8414050
b0[6]      11.427264  1.0153226
b0[7]      10.140683  1.0704554
b0[8]      18.973798  0.9577321
...
b1          2.192245  0.10753146
mu.int     12.656230  0.24356510
sigma       6.085708  0.05064550
sigma.int   0.117515  0.01433764
\[Y_{ij} = \beta_0 + \beta_1 cses_{ij} + \nu_{0i} + \nu_{1i} cses_{ij} + \varepsilon_{ij}\]
                Mean        SD
b0[1]        9.986826  0.8547047
b0[2]       13.422617  1.1352827
...
b1[158]      2.4861617 0.75520371
b1[159]      2.5727856 0.72138232
b1[160]      2.0203291 0.70894152
mu.int      12.6357940 0.24790906
mu.slope     2.1753626 0.12624520
sigma        6.0597152 0.05286105
sigma.int    2.9698435 0.19077038
sigma.slope  0.8257171 0.16246617
Model 1: \(Y_{i} = \beta_0 + \beta_1 cses_i + \varepsilon_{i}\)
Mean deviance: 47777 penalty 3.134 Penalized deviance: 47781
Model 2: \(Y_{ij} = \beta_0 + \beta_1 cses_{ij} + \nu_{0i} + \varepsilon_{ij}\)
Mean deviance: 46339 penalty 147.7 Penalized deviance: 46486
Model 3: \(Y_{ij} = \beta_0 + \beta_1 cses_{ij} + \nu_{0i} + \nu_{1i} cses_{ij} + \varepsilon_{ij}\)
Mean deviance: 46279 penalty 190.7 Penalized deviance: 46470
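The penalized deviances (DICs) above can be converted into model weights in the same way Akaike weights are built from AIC (Wagenmakers & Farrell, 2004). A minimal sketch using the rounded values from the output; `dic_model_weights` is an illustrative reimplementation, not the `dic_weights` helper used below:

```python
import math

def dic_model_weights(dics):
    """Turn penalized deviances (DICs) into model weights,
    analogous to Akaike weights: w_m ∝ exp(-0.5 * (DIC_m - DIC_min))."""
    best = min(dics)
    rel = [math.exp(-0.5 * (d - best)) for d in dics]
    total = sum(rel)
    return [r / total for r in rel]

# Rounded penalized deviances of Models 1-3 from the output above
weights = dic_model_weights([47781, 46486, 46470])
# Model 3 (random intercept + random slope) takes essentially all the weight
```

The weight ratios blow up quickly: a DIC difference of 16 already corresponds to a factor of exp(8) ≈ 3000, which is why the ratios reported below are so extreme.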
dic_weights(dic_samples, which = c('rint', 'rslope'))
Inf
dic_weights(dic_samples, which = c('maximal', 'rint'))
33488160
BF01_bic(freq$rslope, freq$rint) 3.243075e+228
BF01_bic(freq$rint, freq$maximal) 64.27706 # rint over maximal (!!)
Data: dat
Models:
freq$rint: mathach ~ cses + (1 | school)
freq$maximal: mathach ~ cses + (cses | school)
Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq)
freq$rint 4 46728 46756 -23360 46720
freq$maximal 6 46723 46764 -23356 46711 9.4331 2 0.008946 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
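`BF01_bic` rests on the BIC approximation to the Bayes factor, BF01 ≈ exp((BIC1 − BIC0)/2) (Wagenmakers, 2007). A sketch in Python using the rounded BICs from the table above; the exact value reported earlier differs slightly because it is computed from unrounded BICs:

```python
import math

def bf01_bic(bic0, bic1):
    """Approximate Bayes factor in favour of model 0:
    BF01 ~= exp((BIC1 - BIC0) / 2)  (Wagenmakers, 2007)."""
    return math.exp((bic1 - bic0) / 2)

# Rounded BICs from the anova table: rint = 46756, maximal = 46764
bf = bf01_bic(46756, 46764)  # exp(4) ≈ 54.6 in favour of the random-intercept model
```

Note the tension this illustrates: the likelihood-ratio test prefers the maximal model (p ≈ 0.009), while the BIC-based Bayes factor, with its heavier complexity penalty, favours the random-intercept model.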
\(Y_{ij} = \beta_0 + \beta_1 cses_{ij} + \nu_{0i} + \nu_{1i} cses_{ij} + \varepsilon_{ij}\)
95% Highest Density Interval
                Mean        SD          lower      upper
b0[1] 9.986826 0.8547047
b0[2] 13.422617 1.1352827
b0[3] 8.035897 0.8398268
...
b1[158] 2.4861617 0.75520371
b1[159] 2.5727856 0.72138232
b1[160] 2.0203291 0.70894152
mu.int 12.6357940 0.24790906 12.17682 13.11603
mu.slope 2.1753626 0.12624520 1.948548 2.442388
sigma 6.0597152 0.05286105 5.956384 6.149032
sigma.int 2.9698435 0.19077038 2.626246 3.268999
sigma.slope 0.8257171 0.16246617 0.5430283 1.127743
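The 95% HDI columns above give the shortest interval containing 95% of the posterior samples. A generic sketch of how such an interval can be computed from MCMC draws (an illustrative reimplementation; variable names are assumptions):

```python
import numpy as np

def hdi(samples, prob=0.95):
    """Shortest interval containing `prob` of the samples
    (the Highest Density Interval reported above)."""
    x = np.sort(np.asarray(samples))
    n_kept = int(np.ceil(prob * len(x)))
    # Width of every candidate interval spanning n_kept consecutive points
    widths = x[n_kept - 1:] - x[:len(x) - n_kept + 1]
    start = int(np.argmin(widths))
    return x[start], x[start + n_kept - 1]

rng = np.random.default_rng(1)
lo, hi = hdi(rng.normal(0, 1, size=100_000))  # close to (-1.96, 1.96)
```

For unimodal, roughly symmetric posteriors the HDI nearly coincides with the equal-tailed credible interval; it differs mainly for skewed or bounded parameters such as the variance components.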
BF01_bic(freq$maximal, freq$maxsector) 200507890
savage_dickey(sector_delta, prior = dcauchy(0)) 60719.16
library('rjags') # interface with JAGS
library('coda') # MCMC diagnostics
library('ggmcmc') # nice plots
library('polspline') # get height of posterior
ggs_traceplot(ggs(samples))
gelman.diag(samples)
ggs_density(ggs(samples))
\(BF_{01} = \frac{p(D|H_0)}{p(D|H_1)} = \frac{p(\delta \, = \, 0|D, H_1)}{p(\delta \, = \, 0|H_1)}\)
library('polspline')
savage_dickey <- function(samples, prior) {
  fit.posterior <- logspline(samples, lbound = 0)  # estimate the posterior density from MCMC samples
  posterior <- dlogspline(0, fit.posterior)        # posterior density at delta = 0
  BF01 <- posterior / prior                        # Savage-Dickey ratio in favour of H0
  1 / BF01                                         # return BF10
}
1 / savage_dickey(samples, prior = dcauchy(0)) 3.56
savage_dickey(samples, prior = dunif(1, 0, 1000000)) Inf
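The same Savage-Dickey ratio can be sketched without `polspline`, using a kernel density estimate of the posterior at δ = 0. This is an illustrative Python reimplementation of the idea (Wagenmakers et al., 2010), not the slides' R code, and the simulated "posterior" is fake data:

```python
import numpy as np
from scipy.stats import cauchy, gaussian_kde

def savage_dickey_bf10(posterior_samples, prior_density_at_zero):
    """BF10 via the Savage-Dickey ratio: prior density at delta = 0
    divided by posterior density at delta = 0."""
    posterior_at_zero = gaussian_kde(posterior_samples)(0.0)[0]
    return prior_density_at_zero / posterior_at_zero

# Fake posterior centred away from zero -> modest evidence for an effect
rng = np.random.default_rng(0)
delta = rng.normal(0.5, 0.2, size=50_000)
bf10 = savage_dickey_bf10(delta, cauchy.pdf(0))  # > 1: data favour H1
```

The `dunif` result above is Lindley's paradox in action: as the prior on δ becomes arbitrarily vague, its density at 0 shrinks toward zero and the Bayes factor is driven to an extreme regardless of the data (Lindley, 1957; DeGroot, 1982).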
DeGroot, M. H. (1982). Lindley’s paradox: Comment. Journal of the American Statistical Association, 336–339.
Jaeger, T. F. (2008). Categorical data analysis: Away from ANOVAs (transformation or not) and towards logit mixed models. Journal of Memory and Language, 59(4), 434–446.
Lindley, D. V. (1957). A statistical paradox. Biometrika, 187–192.
Rouder, J. N., & Morey, R. D. (2012). Default Bayes factors for model selection in regression. Multivariate Behavioral Research, 47(6), 877–903.
Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779–804.
Wagenmakers, E.-J., & Farrell, S. (2004). AIC model selection using Akaike weights. Psychonomic Bulletin & Review, 11(1), 192–196.
Wagenmakers, E.-J., Lodewyckx, T., Kuriyal, H., & Grasman, R. (2010). Bayesian hypothesis testing for psychologists: A tutorial on the Savage–Dickey method. Cognitive Psychology, 60(3), 158–189.